Bellman equation
A Bellman equation, also known as a ''dynamic programming equation'', is named after its discoverer, Richard Bellman. It is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It expresses the value of a decision problem at a certain point in time in terms of the payoff from some initial choices and the value of the remaining decision problem that results from those choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman's "Principle of Optimality" prescribes.
The Bellman equation was first applied to engineering control theory and to other topics in applied mathematics, and subsequently became an important tool in economic theory, though the basic concepts of dynamic programming are prefigured in John von Neumann and Oskar Morgenstern's ''Theory of Games and Economic Behavior'' and Abraham Wald's ''Sequential Analysis''.
Almost any problem that can be solved using optimal control theory can also be solved by analyzing the appropriate Bellman equation. However, the term 'Bellman equation' usually refers to the dynamic programming equation associated with discrete-time optimization problems. In continuous-time optimization problems, the analogous equation is a partial differential equation usually called the Hamilton–Jacobi–Bellman equation.
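For concreteness, in a standard infinite-horizon, discrete-time setting the equation takes the following form. The notation here (state ''x'', feasible action set Γ(''x''), period payoff ''F'', discount factor β, transition function ''T'') is a common textbook convention, not something fixed by the text above:

    V(x) = \max_{a \in \Gamma(x)} \{ F(x, a) + \beta \, V(T(x, a)) \}

In words: the value ''V''(''x'') of being in state ''x'' is the best achievable sum of today's payoff and the discounted value of the state that today's action leads to. The section below unpacks each ingredient of this recursion.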
== Analytical concepts in dynamic programming ==
To understand the Bellman equation, several underlying concepts must be understood. First, any optimization problem has some objective – minimizing travel time, minimizing cost, maximizing profits, maximizing utility, et cetera. The mathematical function that describes this objective is called the ''objective function''.
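As a concrete illustration (the functional forms are assumptions made for the running example below, not something the article fixes): in a consumption problem, the objective function might be lifetime discounted utility,

    \max_{\{c_t\}} \sum_{t=0}^{\infty} \beta^t u(c_t), \qquad \text{with, e.g., } u(c) = \ln c,

where ''c_t'' is consumption in period ''t'', 0 < β < 1 is a discount factor, and ''u'' is the period utility function.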
Dynamic programming breaks a multi-period planning problem into simpler steps at different points in time. Therefore, it requires keeping track of how the decision situation is evolving over time. The information about the current situation which is needed to make a correct decision is called the "state".〔Bellman, R.E. (1957). ''Dynamic Programming''. Princeton University Press, Princeton, NJ. Republished 2003: Dover, ISBN 0-486-42809-5.〕〔S. Dreyfus (2002), 'Richard Bellman on the birth of dynamic programming', ''Operations Research'' 50 (1), pp. 48–51.〕 For example, to decide how much to consume and spend at each point in time, people would need to know (among other things) their initial wealth. Therefore, wealth would be one of their ''state variables'', but there would probably be others.
The variables chosen at any given point in time are often called the ''control variables''. For example, given their current wealth, people might decide how much to consume now. Choosing the control variables now may be equivalent to choosing the next state; more generally, the next state is affected by other factors in addition to the current control. For example, in the simplest case, today's wealth (the state) and consumption (the control) might exactly determine tomorrow's wealth (the new state), though typically other factors will affect tomorrow's wealth too.
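A minimal sketch of this state–control–transition structure in Python; the linear savings technology and the interest rate ''r'' are illustrative assumptions, not part of the article:

    # State: current wealth W.  Control: consumption c, feasible when 0 <= c <= W.
    # Transition: wealth not consumed today earns a gross return (1 + r) by tomorrow.
    def next_state(W: float, c: float, r: float = 0.05) -> float:
        """Tomorrow's wealth (the new state), given today's state and control."""
        assert 0.0 <= c <= W, "consumption must be feasible given current wealth"
        return (1.0 + r) * (W - c)

    print(next_state(100.0, 30.0))  # the 70 units saved grow by 5 percent, to 73.5

Here the pair (state, control) fully determines the next state; in richer models the transition would also take random shocks or other outside factors as arguments.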
The dynamic programming approach describes the optimal plan by finding a rule that tells what the controls should be, given any possible value of the state. For example, if consumption (''c'') depends ''only'' on wealth (''W''), we would seek a rule ''c''(''W'') that gives consumption as a function of wealth. Such a rule, determining the controls as a function of the states, is called a ''policy function'' (see Bellman, 1957, Ch. III.2).
Finally, by definition, the optimal decision rule is the one that achieves the best possible value of the objective. For example, if someone chooses consumption, given wealth, in order to maximize happiness (assuming happiness ''H'' can be represented by a mathematical function, such as a utility function), then each level of wealth will be associated with some highest possible level of happiness, ''H''(''W''). The best possible value of the objective, written as a function of the state, is called the ''value function''.
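Tying the running example together (the functional forms remain illustrative assumptions): the value function ''V''(''W'') of the consumption problem satisfies

    V(W) = \max_{0 \le c \le W} \{ u(c) + \beta \, V((1 + r)(W - c)) \}

which is exactly a Bellman equation: the highest happiness achievable from wealth ''W'' equals the best choice of today's utility plus the discounted value of the wealth that choice leaves for tomorrow. The maximizing ''c'' at each ''W'' is the policy function ''c''(''W'') described above.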
Richard Bellman showed that a dynamic optimization problem in discrete time can be stated in a recursive, step-by-step form known as backward induction by writing down the relationship between the value function in one period and the value function in the next period. The relationship between these two value functions is called the "Bellman equation".

In this approach, the optimal policy in the last time period is specified in advance as a function of the state variable's value at that time, and the resulting optimal value of the objective function is expressed in terms of that value of the state variable. Next, the next-to-last period's optimization involves maximizing the sum of that period's objective function and the optimal value of the future objective function, giving that period's optimal policy as a function of the state variable's value in the next-to-last period. This logic continues recursively back in time until the first period's decision rule is derived, as a function of the initial state variable's value, by optimizing the sum of the first period's objective function and the value function of the second period, which captures the value of all future periods. Thus, each period's decision is made by explicitly acknowledging that all future decisions will be made optimally.
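A minimal sketch of backward induction for a finite-horizon version of the consumption example, on a discrete wealth grid. Everything here (log utility, the grid, the horizon ''T'', β, ''r'') is an assumed toy specification rather than the article's own model:

    import math

    BETA, R = 0.95, 0.05        # discount factor and interest rate (assumed)
    T = 3                       # decision periods t = 0, 1, 2
    GRID = [i / 10 for i in range(1, 201)]   # wealth grid: 0.1, 0.2, ..., 20.0

    def u(c):
        return math.log(c)      # period utility ("happiness"); log is an assumption

    def snap(w):
        # The grid is uniform with step 0.1, so round wealth to the nearest grid point.
        return min(max(round(w * 10), 1), 200) / 10

    # V[t][w]: best achievable value from period t onward, starting with wealth w.
    # Terminal condition: after the last period there is nothing left to decide.
    V = [dict() for _ in range(T + 1)]
    policy = [dict() for _ in range(T)]
    for w in GRID:
        V[T][w] = 0.0

    # Backward induction: solve the last period first, then step back in time.
    for t in reversed(range(T)):
        for w in GRID:
            best_val, best_c = -math.inf, None
            for c in GRID:
                if c > w:                          # consumption cannot exceed wealth
                    break
                w_next = snap((1 + R) * (w - c))   # tomorrow's state
                val = u(c) + BETA * V[t + 1][w_next]   # Bellman right-hand side
                if val > best_val:
                    best_val, best_c = val, c
            V[t][w] = best_val
            policy[t][w] = best_c                  # the decision rule c(W) at time t

    # The period-0 rule already anticipates that periods 1 and 2 will be solved optimally.
    print(policy[0][10.0], V[0][10.0])

Snapping tomorrow's wealth back onto the grid is a crude discretization; serious implementations interpolate the value function instead, but the backward recursion itself is exactly the one described in the paragraph above.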

Excerpt source: Wikipedia, the free encyclopedia (English edition). The full text of the "Bellman equation" article can be read on Wikipedia.


